Results

PART I

In this section, each participant's learning data were split into a first half and a second half and fit separately; the two set-sizes were fit together. Are learning outcomes for the two halves correlated?

#> Analysis of Variance Table
#> 
#> Response: mean.acc
#>                 Df  Sum Sq Mean Sq F value Pr(>F)    
#> half             1  0.0150 0.01504  0.9589 0.3278    
#> condition        3  2.7371 0.91238 58.1842 <2e-16 ***
#> half:condition   3  0.0589 0.01962  1.2511 0.2903    
#> Residuals      656 10.2866 0.01568                   
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
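The table above is consistent with a two-way ANOVA of mean accuracy on half and condition. A minimal sketch of the call, assuming a long-format data frame `acc_long` with columns `mean.acc`, `half`, and `condition` (names are assumptions, not confirmed by the output):

```r
# Hypothetical reconstruction of the ANOVA above; `acc_long` is an assumed
# long-format data frame, one row per participant x half x condition.
fit <- aov(mean.acc ~ half * condition, data = acc_long)
anova(fit)  # prints Df, Sum Sq, Mean Sq, F value, Pr(>F) as above
```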
Correlation between first half and second half of learning outcomes

condition    r      p.val
set3_learn   0.341  0.002
set3_test    0.576  0.000
set6_learn   0.667  0.000
set6_test    0.655  0.000
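The per-condition split-half correlations above can be computed with a grouped `cor.test`. A sketch assuming a wide data frame `outcomes` with one row per participant and condition, and columns `condition`, `half1`, and `half2` (assumed names):

```r
library(dplyr)

# `outcomes`: assumed data frame with columns condition, half1, half2,
# one row per participant x condition.
outcomes %>%
  group_by(condition) %>%
  summarise(r     = cor(half1, half2),
            p.val = cor.test(half1, half2)$p.value)
```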

Are test accuracies higher in half2 compared to half1?

#> # A tibble: 2 × 3
#>   half     s3     s6
#>   <chr> <dbl>  <dbl>
#> 1 half1 0.180 0.0986
#> 2 half2 0.147 0.0596

Correlation between first half and second half of learning rate

condition    n   r      p.val
set3_learn   83  0.130  0.241
set6_learn   83  0.258  0.018

How many participants have both halves fit the same model as Experiment 1?

We find that 55.4% of participants have both halves fitting the same model as in Experiment 1. For 61.4% of participants the first half fit the same model as in Experiment 1, and for 66.3% the second half did.
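These percentages amount to comparing each half's best-fitting model against the Experiment 1 fit. A sketch assuming a data frame `fits` with one row per participant and hypothetical columns `exp1_model`, `half1_model`, and `half2_model`:

```r
library(dplyr)

# `fits`: assumed one row per participant; column names are hypothetical.
fits %>%
  summarise(
    both_match  = mean(half1_model == exp1_model & half2_model == exp1_model),
    half1_match = mean(half1_model == exp1_model),
    half2_match = mean(half2_model == exp1_model)
  )
```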

How many participants fit different models in the two halves?


Comparing the first half with the second half, 73.49% of participants fit the same model in both halves.
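This percentage can be tabulated with `dplyr::count()` on matching versus non-matching halves. A sketch under an assumed `fits` data frame with one row per participant and hypothetical columns `half1_model` and `half2_model`:

```r
library(dplyr)

# Tally participants whose two halves fit the same model.
fits %>%
  mutate(same_model = half1_model == half2_model) %>%
  count(same_model) %>%
  mutate(pct = 100 * n / sum(n))
```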

For those participants who fit different models in the two halves, were learning outcomes affected?

These scatter plots show the differences in the distributions of learning outcomes for subjects who fit the same model in both halves versus those who fit different models.

#> # A tibble: 4 × 3
#>   condition   t_val p_val
#>   <chr>       <dbl> <dbl>
#> 1 set3_learn  0.991 0.329
#> 2 set3_test  -1.28  0.206
#> 3 set6_learn  0.729 0.470
#> 4 set6_test  -1.01  0.320

#> # A tibble: 4 × 3
#>   condition    t_val p_val
#>   <chr>        <dbl> <dbl>
#> 1 set3_learn  1.04   0.303
#> 2 set3_test  -0.114  0.909
#> 3 set6_learn -0.0821 0.935
#> 4 set6_test  -1.24   0.219

#> Analysis of Variance Table
#> 
#> Response: mean.acc
#>                      Df  Sum Sq Mean Sq F value Pr(>F)    
#> condition             3  2.7371 0.91238 57.9458 <2e-16 ***
#> comp                  1  0.0001 0.00007  0.0046 0.9459    
#> half                  1  0.0150 0.01504  0.9549 0.3288    
#> condition:comp        3  0.0423 0.01409  0.8948 0.4435    
#> condition:half        3  0.0589 0.01962  1.2460 0.2921    
#> comp:half             1  0.0046 0.00463  0.2938 0.5880    
#> condition:comp:half   3  0.0367 0.01222  0.7763 0.5075    
#> Residuals           648 10.2030 0.01575                   
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

PART II

As described above, learning data for all participants were split into the first six stimulus iterations and the last six before model fitting. Additionally, the two conditions, set-size 3 and set-size 6, were also fit separately.

#> # A tibble: 10 × 8
#>    subjects name     mod.id  model index condition  parameter param_vals
#>       <int> <chr>    <chr>   <chr> <int> <chr>      <chr>          <dbl>
#>  1     6209 half1_N3 LTM_105 LTM     105 set3_learn alpha        NaN    
#>  2     6209 half1_N3 LTM_105 LTM     105 set3_learn egs          NaN    
#>  3     6209 half1_N3 LTM_105 LTM     105 set3_learn bll            1.41 
#>  4     6209 half1_N3 LTM_105 LTM     105 set3_learn imag          -1.41 
#>  5     6209 half1_N3 LTM_105 LTM     105 set3_learn ans            1.41 
#>  6     6209 half1_N6 LTM_79  LTM      79 set6_learn alpha        NaN    
#>  7     6209 half1_N6 LTM_79  LTM      79 set6_learn egs          NaN    
#>  8     6209 half1_N6 LTM_79  LTM      79 set6_learn bll            0.704
#>  9     6209 half1_N6 LTM_79  LTM      79 set6_learn imag          -1.41 
#> 10     6209 half1_N6 LTM_79  LTM      79 set6_learn ans            0.704

Overview of model-fitting results

Figure x

Figure x

We found that the LTM model still fit the most subjects in both halves (first half: LTM M = 52.5, RL M = 6.5, META M = 11.5, STR: ; second half: LTM M = 46, RL M = 12.5, META M = 13, STR: ) and in both conditions (set-size 3: LTM M = 48.5, RL M = 12, META M = 11.5, STR M = 11; set-size 6: LTM M = 50, RL M = 7, META M = 13, STR M = 13), much like the results obtained through the model-fitting procedure in Experiment 1 (Figure 1). Furthermore, more subjects fit the LTM model in the set-size 3 condition than in the set-size 6 condition (more in the first half than in the second half, for both). Conversely, among subjects who fit the RL model, more fit it in the set-size 6 condition than in set-size 3 (more in the second half than in the first half, for both conditions). This trend aligns more closely with Collins (2018), but these results do not take individual dynamics into account (covered in detail below).
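The per-half and per-condition model counts can be tallied directly from the fit results. A sketch assuming a long data frame `best_fits` with one row per participant, half, and condition, and a `model` column (LTM, RL, META, STR); the names are assumptions:

```r
library(dplyr)
library(tidyr)

# `best_fits`: assumed columns half, condition, model (the best-fitting
# model per participant x half x condition).
best_fits %>%
  count(half, condition, model) %>%
  pivot_wider(names_from = model, values_from = n, values_fill = 0)
```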

Looking only at the second half, what percentage of participants fit the same model for set-sizes 3 and 6, and how many fit different models?

#> # A tibble: 4 × 4
#> # Groups:   set_size [2]
#>   set_size condition t_stat p.val
#>   <chr>    <chr>      <dbl> <dbl>
#> 1 set3     learn      0.649 0.520
#> 2 set3     test      -0.238 0.813
#> 3 set6     learn      0.326 0.746
#> 4 set6     test      -0.968 0.339

for the group that swithced, what model fit them best in the second half for set3 and 6?